182 research outputs found
Discussion of ``2004 IMS Medallion Lecture: Local Rademacher complexities and oracle inequalities in risk minimization'' by V. Koltchinskii [arXiv:0708.0083]
Comment: Published at http://dx.doi.org/10.1214/009053606000001073 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org)
On non-asymptotic bounds for estimation in generalized linear models with highly correlated design
We study a high-dimensional generalized linear model and penalized empirical risk minimization with ℓ1-penalty. Our aim is to provide a non-trivial illustration that non-asymptotic bounds for the estimator can be obtained without relying on the chaining technique and/or the peeling device.
Comment: Published at http://dx.doi.org/10.1214/074921707000000319 in the IMS Lecture Notes Monograph Series (http://www.imstat.org/publications/lecnotes.htm) by the Institute of Mathematical Statistics (http://www.imstat.org)
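The penalized empirical risk minimization described in this abstract can be illustrated with a minimal sketch: ℓ1-penalized logistic regression fitted by proximal gradient descent (ISTA). This is only an example of the general estimator class, not the paper's construction; the simulated data, step size, and penalty level are illustrative assumptions.

```python
import numpy as np

def soft_threshold(z, t):
    # Proximal operator of t * ||.||_1: coordinatewise soft-thresholding.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def l1_logistic(X, y, lam, n_iter=2000):
    # l1-penalized empirical risk minimization for the logistic loss,
    # solved by proximal gradient descent (ISTA). Labels y are in {0, 1}.
    n, p = X.shape
    # Step size 1/L, where L = ||X||_2^2 / (4n) bounds the Lipschitz
    # constant of the averaged logistic-loss gradient.
    step = 4.0 * n / np.linalg.norm(X, 2) ** 2
    beta = np.zeros(p)
    for _ in range(n_iter):
        mu = 1.0 / (1.0 + np.exp(-X @ beta))   # fitted probabilities
        grad = X.T @ (mu - y) / n              # gradient of the empirical risk
        beta = soft_threshold(beta - step * grad, step * lam)
    return beta

# Hypothetical example: one active coefficient out of ten.
rng = np.random.default_rng(0)
n, p = 200, 10
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[0] = 2.0
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-X @ beta_true))).astype(float)
beta_hat = l1_logistic(X, y, lam=0.05)
```

The soft-thresholding step zeroes out coordinates whose empirical gradient stays below the penalty level, which is the sparsity-inducing mechanism behind this class of estimators.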
χ²-confidence sets in high-dimensional regression
We study a high-dimensional regression model. Our aim is to construct a confidence set for a given group of regression coefficients, treating all other regression coefficients as nuisance parameters. We apply a one-step procedure with the square-root Lasso as initial estimator and a multivariate square-root Lasso for constructing a surrogate Fisher information matrix. The multivariate square-root Lasso is based on nuclear norm loss with ℓ1-penalty. We show that this procedure leads to an asymptotically χ²-distributed pivot, with a remainder term depending only on the ℓ1-error of the initial estimator. We show that under sparsity conditions on the regression coefficients the square-root Lasso produces a consistent estimator of the noise variance, and we establish sharp oracle inequalities which show that the remainder term is small under further sparsity conditions on the coefficients and compatibility conditions on the design.
Comment: 22 pages
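A small sketch of the (univariate) square-root Lasso used here as initial estimator: the objective ||y − Xb||₂/√n + λ||b||₁ is made smooth by splitting b into positive and negative parts, so an off-the-shelf box-constrained solver applies. The problem sizes, λ, and simulated data are assumptions for illustration; the paper's multivariate version and the χ² pivot construction are not reproduced.

```python
import numpy as np
from scipy.optimize import minimize

def sqrt_lasso(X, y, lam):
    # Square-root Lasso: minimize ||y - X b||_2 / sqrt(n) + lam * ||b||_1.
    # Splitting b = b_plus - b_minus with b_plus, b_minus >= 0 turns the
    # l1 term into a linear one, so the objective is smooth (away from a
    # zero residual) and L-BFGS-B with box constraints applies.
    n, p = X.shape

    def objective(z):
        b = z[:p] - z[p:]
        return np.linalg.norm(y - X @ b) / np.sqrt(n) + lam * z.sum()

    res = minimize(objective, np.zeros(2 * p), method="L-BFGS-B",
                   bounds=[(0.0, None)] * (2 * p))
    b_hat = res.x[:p] - res.x[p:]
    # The attained residual norm doubles as an estimate of the noise
    # standard deviation -- the self-normalizing property that makes the
    # square-root Lasso penalty level free of the unknown noise scale.
    sigma_hat = np.linalg.norm(y - X @ b_hat) / np.sqrt(n)
    return b_hat, sigma_hat

# Hypothetical example: sparse signal, moderate noise.
rng = np.random.default_rng(1)
n, p = 100, 5
X = rng.standard_normal((n, p))
beta_true = np.array([1.0, 0.0, 0.0, 0.0, 0.0])
y = X @ beta_true + 0.5 * rng.standard_normal(n)
b_hat, sigma_hat = sqrt_lasso(X, y, lam=0.2)
```

Consistency of `sigma_hat` is exactly the noise-variance consistency the abstract attributes to the square-root Lasso under sparsity conditions.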
The additive model with different smoothness for the components
We consider an additive regression model consisting of two components, where the first component is in some sense "smoother" than the second. Smoothness is here described in terms of a semi-norm on the class of regression functions. We use a penalized least squares estimator of the sum of the two components and show that the rate of convergence for the smoother component is faster than the rate of convergence for the less smooth one. In fact, both rates are generally as fast as in the case where one of the two components is known. The theory is illustrated by a simulation study. Our proofs rely on recent results from empirical process theory.
Comment: 26 pages, 4 figures
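The two-component additive fit can be mimicked with a short backfitting loop that uses a wider kernel bandwidth for the component assumed smoother. This is a sketch under assumed data and bandwidths, with Nadaraya-Watson smoothers standing in for the paper's penalized least squares estimator.

```python
import numpy as np

def nw_smooth(x, r, h):
    # Nadaraya-Watson smoother with a Gaussian kernel of bandwidth h.
    w = np.exp(-0.5 * ((x[:, None] - x[None, :]) / h) ** 2)
    return (w @ r) / w.sum(axis=1)

def backfit(x1, x2, y, h1=0.3, h2=0.05, n_iter=20):
    # Fit y ~ alpha + f1(x1) + f2(x2) by backfitting. The larger
    # bandwidth h1 encodes the assumption that f1 is smoother than f2.
    alpha = y.mean()
    f1 = np.zeros_like(y)
    f2 = np.zeros_like(y)
    for _ in range(n_iter):
        f1 = nw_smooth(x1, y - alpha - f2, h1)
        f1 -= f1.mean()                # center components for identifiability
        f2 = nw_smooth(x2, y - alpha - f1, h2)
        f2 -= f2.mean()
    return alpha, f1, f2

# Hypothetical example: a smooth linear component plus a wigglier one.
rng = np.random.default_rng(2)
n = 300
x1, x2 = rng.random(n), rng.random(n)
y = 2.0 * x1 + np.sin(4.0 * np.pi * x2) + 0.3 * rng.standard_normal(n)
alpha, f1, f2 = backfit(x1, x2, y)
resid = y - alpha - f1 - f2
```

Each pass smooths the partial residual of one component against its own covariate, so the two estimated components absorb variation at their own scale.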